Nearly optimal Bayesian shrinkage for high-dimensional regression

Authors

Abstract

During the past decade, shrinkage priors have received much attention in the Bayesian analysis of high-dimensional data. This paper establishes posterior consistency for high-dimensional linear regression with a class of shrinkage priors that have a heavy, flat tail and allocate sufficiently large probability mass to a very small neighborhood of zero. While enjoying its efficiency in posterior simulations, such a shrinkage prior can lead to a nearly optimal posterior contraction rate and variable selection consistency, as the spike-and-slab prior does. Our numerical results show that under posterior consistency, Bayesian methods can yield better results than regularization methods such as the LASSO and SCAD. The paper also establishes a BvM-type (Bernstein-von Mises) result, which leads to a convenient way of uncertainty quantification for regression coefficient estimates.
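The tail behavior the abstract describes can be illustrated numerically. As a stand-in for the paper's class of shrinkage priors, the sketch below uses a Cauchy density with a small scale (the scale and the cutoffs are illustrative assumptions, not values from the paper): it places most of its mass in a small neighborhood of zero yet keeps a polynomially decaying tail, whereas a light-tailed Laplace prior of the same scale leaves essentially no mass for large coefficients.

```python
import math

def cauchy_mass_near_zero(eps, scale):
    # P(|theta| < eps) for a Cauchy(0, scale) prior
    return 2 * math.atan(eps / scale) / math.pi

def cauchy_tail(m, scale):
    # P(|theta| > m) for a Cauchy(0, scale) prior
    return 1 - 2 * math.atan(m / scale) / math.pi

def laplace_mass_near_zero(eps, b):
    # P(|theta| < eps) for a Laplace(0, b) prior
    return 1 - math.exp(-eps / b)

def laplace_tail(m, b):
    # P(|theta| > m) for a Laplace(0, b) prior
    return math.exp(-m / b)

scale = 0.01          # small scale concentrates prior mass near zero
eps, m = 0.05, 5.0    # "small neighborhood" and "large coefficient" cutoffs

print(cauchy_mass_near_zero(eps, scale))   # ~0.87: most mass near zero
print(laplace_mass_near_zero(eps, scale))  # ~0.99
print(cauchy_tail(m, scale))               # ~1.3e-3: heavy polynomial tail
print(laplace_tail(m, scale))              # ~7e-218: exponential tail vanishes
```

The heavy tail is what lets truly large coefficients escape over-shrinkage while small ones are pulled to zero, mimicking the spike-and-slab behavior.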


Similar articles

Nearly Optimal Minimax Estimator for High Dimensional Sparse Linear Regression

We present estimators for a well-studied statistical estimation problem: estimation in the linear regression model with soft sparsity constraints (an ℓq constraint with 0 < q ≤ 1) in the high-dimensional setting. We first present a family of estimators, called the projected nearest neighbor estimator, and show, using results from Convex Geometry, that such an estimator is within a logarithmic ...
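The ℓq constraint with 0 < q ≤ 1 is "soft" sparsity: for q < 1, the quantity Σ|xᵢ|^q is smaller for vectors that concentrate the same total magnitude on fewer coordinates, so an ℓq ball favors nearly sparse vectors. A minimal sketch (the example vectors are illustrative, not from the abstract):

```python
def lq_quasi_norm(x, q):
    # sum_i |x_i|^q; for 0 < q < 1 this rewards concentrating
    # the same total magnitude on fewer coordinates
    return sum(abs(v) ** q for v in x)

sparse = [1.0, 0.0, 0.0, 0.0]     # one large coordinate
dense = [0.25, 0.25, 0.25, 0.25]  # same l1 norm, spread out

print(lq_quasi_norm(sparse, 0.5))  # 1.0
print(lq_quasi_norm(dense, 0.5))   # 2.0: the spread-out vector costs more
```

For q = 1 both vectors would cost the same, which is why q < 1 enforces sparsity more aggressively than the ℓ1 constraint.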

High dimensional thresholded regression and shrinkage effect

High dimensional sparse modelling via regularization provides a powerful tool for analysing large-scale data sets and obtaining meaningful interpretable models. The use of nonconvex penalty functions shows advantage in selecting important features in high dimensions, but the global optimality of such methods still demands more understanding. We consider sparse regression with a hard thresholding ...
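Hard-thresholded regression keeps a coefficient unchanged when it exceeds the threshold and zeroes it otherwise, unlike the soft thresholding used by the LASSO, which also shrinks the survivors toward zero. A minimal sketch of the two operators (illustrative, not code from the paper):

```python
import math

def hard_threshold(z, lam):
    # H_lam(z) = z if |z| > lam else 0: selects without shrinking
    return z if abs(z) > lam else 0.0

def soft_threshold(z, lam):
    # S_lam(z) = sign(z) * max(|z| - lam, 0): selects and shrinks
    return math.copysign(max(abs(z) - lam, 0.0), z)

print(hard_threshold(2.0, 1.0))  # 2.0 (no shrinkage bias on survivors)
print(soft_threshold(2.0, 1.0))  # 1.0 (shrunk toward zero)
print(hard_threshold(0.5, 1.0))  # 0.0 (dropped)
```

The absence of shrinkage bias on retained coefficients is the appeal of hard thresholding, at the cost of a nonconvex objective.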

The Sparse Laplacian Shrinkage Estimator for High-Dimensional Regression.

We propose a new penalized method for variable selection and estimation that explicitly incorporates the correlation patterns among predictors. This method is based on a combination of the minimax concave penalty and Laplacian quadratic associated with a graph as the penalty function. We call it the sparse Laplacian shrinkage (SLS) method. The SLS uses the minimax concave penalty for encouragin...
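The minimax concave penalty (MCP) used in the SLS method grows like the LASSO penalty λ|t| near zero but flattens to the constant γλ²/2 beyond |t| = γλ, so large coefficients are left nearly unpenalized. A sketch of the penalty function (the parameter values are illustrative):

```python
def mcp(t, lam=1.0, gamma=3.0):
    # Minimax concave penalty: a concave bridge between
    # lam*|t| near zero and the constant gamma*lam^2/2
    a = abs(t)
    if a <= gamma * lam:
        return lam * a - a * a / (2.0 * gamma)
    return gamma * lam * lam / 2.0

print(mcp(0.0))   # 0.0
print(mcp(0.1))   # ~0.098: close to the lasso penalty lam*|t|
print(mcp(10.0))  # 1.5: flat, so large coefficients are not over-penalized
```

The Laplacian quadratic term that SLS adds on top of MCP then encourages correlated predictors (neighbors in the graph) to receive similar coefficients.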

Bayesian shrinkage prediction for the regression problem

We consider Bayesian shrinkage predictions for the Normal regression problem under the frequentist Kullback-Leibler risk function. First, we consider the multivariate Normal model with an unknown mean and a known covariance. While the unknown mean is fixed, the covariance of future samples can differ from that of the training samples. We show that the Bayesian predictive distribution based on the u...

Methods for regression analysis in high-dimensional data

With the evolution of science, knowledge, and technology, new and precise methods for measuring, collecting, and recording information have been developed, which has led to the emergence and growth of high-dimensional data. A high-dimensional data set, i.e., a data set in which the number of explanatory variables is much larger than the number of observations, cannot be easily analyzed by ...


Journal

Journal title: Science China Mathematics

Year: 2022

ISSN: 1674-7283, 1869-1862

DOI: https://doi.org/10.1007/s11425-020-1912-6